Frontiers in Neuroinformatics
Frontiers Media SA
All preprints, ranked by how well they match the content profile of Frontiers in Neuroinformatics, based on 38 papers previously published here. The average preprint has a 0.03% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Gerkin, R. C.; Birgiolas, J.; Jarvis, R. J.; Omar, C.; Crook, S. M.
Validating a quantitative scientific model requires comparing its predictions against many experimental observations, ideally from many labs, using transparent, robust, statistical comparisons. Unfortunately, in rapidly growing fields like neuroscience, this is becoming increasingly untenable, even for the most conscientious scientists. Thus the merits and limitations of existing models, or whether a new model improves on the state of the art, are often unclear.

Software engineers seeking to verify, validate and contribute to a complex software project rely on suites of simple executable tests, called "unit tests". Drawing inspiration from this practice, we previously developed SciUnit, an easy-to-use framework for developing data-driven "model validation tests": executable functions, here written in Python. Each such test generates and statistically validates predictions from a model against one relevant feature of empirical data to produce a score indicating agreement between the model and the data. Suites of such validation tests can be used to clearly identify the merits and limitations of existing models and developmental progress on new models.

Here we describe NeuronUnit, a library that builds upon SciUnit and integrates with several existing neuroinformatics resources to support the validation of single-neuron models using data gathered by neurophysiologists and neuroanatomists. NeuronUnit integrates with existing technologies such as Jupyter, Pandas, and NeuroML, and resources such as NeuroElectro, the Allen Institute, and the Human Brain Project, to make neuron model validation as easy as possible for computational neuroscientists.
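The core idea of a "model validation test" can be sketched in a few lines of plain Python: a test holds experimental observations, asks the model only for the capability it needs, and returns a statistical score. This is a toy illustration in the spirit of SciUnit, not its actual API; all class and method names below are hypothetical.

```python
class RestingPotentialTest:
    """Toy validation test: compare a model's predicted resting potential
    against pooled experimental data via a z-score.
    Illustrative only; the names and scoring rule are not SciUnit's API."""

    def __init__(self, observed_mean_mv, observed_std_mv):
        self.mean = observed_mean_mv
        self.std = observed_std_mv

    def judge(self, model):
        # The model only needs to expose the capability the test requires.
        prediction = model.predict_resting_potential()
        z = (prediction - self.mean) / self.std
        return {"prediction": prediction, "z_score": z, "pass": abs(z) < 2.0}


class ToyNeuronModel:
    def predict_resting_potential(self):
        return -68.0  # mV, a fixed toy prediction


test = RestingPotentialTest(observed_mean_mv=-65.0, observed_std_mv=3.0)
score = test.judge(ToyNeuronModel())
```

A suite is then just a list of such tests applied to a list of models, producing a table of scores.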
Subramanian, A.; Lan, H.; Govindarajan, S.; Viswanathan, L.; Choupan, J.; Sepehrband, F.
We present NiftyTorch, a deep learning framework for NeuroImaging. The motivation behind the development of such a library is that there are few centralized tools for deploying 3D deep learning in neuroimaging. In addition, most existing tools require expert technical knowledge in deep learning or programming, creating a barrier to entry. The goal is to provide a one-stop package with which users can perform classification, segmentation, and image transformation tasks. The intended audience is members of the neuroimaging community who would like to explore deep learning but have no background in coding. In this article we explore the capabilities of the framework, its performance, and future work.
Raikov, I. G.; Milstein, A. D.; Moolchand, P.; Szabo, G. G.; Schneider, C. J.; Hadjiabadi, D. H.; Chatzikalymniou, A. P.; Soltesz, I.
Large-scale computational models of the brain are necessary to accurately represent anatomical and functional variability in neuronal biophysics across brain regions and also to capture and study local and global interactions between neuronal populations on a behaviorally-relevant temporal scale. We present the methodology behind and an initial implementation of a novel open-source computational framework for construction, simulation, and analysis of models consisting of millions of neurons on high-performance computing systems, based on the NEURON and CoreNEURON simulators (Carnevale and Hines, 2006, Kumbhar et al., 2019). This framework uses the HDF5 data format and software library (HDF Group, 2021) and includes a data format for storing morphological, synaptic, and connectivity information of large neuronal network models, and an accompanying open-source software library that provides efficient, scalable parallel storage and MPI-based data movement capabilities. We outline our approaches for constructing detailed large-scale biophysical models with topographical connectivity and input stimuli, and present simulation results obtained with a full-scale model of the dentate gyrus constructed with our framework. The model generates sparse and spatially selective population activity that fits well with in-vivo experimental data. Moreover, our approach is fully general and can be applied to modeling other regions of the hippocampal formation in order to rapidly evaluate specific hypotheses about large-scale neural architectural features.
De Schepper, R. G.; Abi Akar, N.; Hater, T.; Huisman, B. F.; D'Angelo, E. U.; Morrison, A.; Casellato, C.
A variety of software simulators exist for neuronal networks, and a subset of these tools allow the scientist to model neurons in high morphological detail. The scalability of such simulation tools over a wide range of neuronal network sizes and cell complexities is predominantly limited by the effective allocation of simulation components over computational nodes, and the overhead in communication between them. In order to have more scalable simulation software, it is therefore important to develop a robust benchmarking strategy that allows insight into specific computational bottlenecks for models of realistic size and complexity. In this study, we demonstrate the use of the Brain Scaffold Builder (BSB; De Schepper et al., 2021) as a framework for performing such benchmarks. We perform a comparison between the well-known neuromorphological simulator NEURON (Carnevale and Hines, 2006) and Arbor (Abi Akar et al., 2019), a new simulation library developed within the framework of the Human Brain Project. The BSB can construct identical neuromorphological and network setups of highly spatially and biophysically detailed networks for each simulator. This ensures good coverage of feature support in each simulator, and realistic workloads. After validating the outputs of the BSB-generated models, we execute the simulations on a variety of hardware configurations consisting of two types of nodes (GPU and CPU). We investigate the performance of two different network models, one suited for a single machine, and one for distributed simulation. We investigate performance across different mechanisms, mechanism classes, mechanism combinations, and cell types. Our benchmarks show that, depending on the distribution scheme deployed by Arbor, a speed-up with respect to NEURON of between 60 and 400 can be achieved. Additionally, Arbor can be up to two orders of magnitude more energy efficient.
Lerma-Usabiaga, G.; Perry, M.; Wandell, B. A.
Reproducible Tract Profiles (RTP) comprises a set of methods to manage and analyze diffusion weighted imaging (DWI) data for reproducible tractography. The tools take MRI data from the scanner and process them through a series of analyses implemented as Docker containers that are integrated into a modern neuroinformatics platform (Flywheel). The platform guarantees that the entire pipeline can be re-executed, using the same data and computational parameters. In this paper, we describe (1) a cloud-based neuroinformatics platform, (2) a tool to programmatically access and control the platform from a client, and (3) the DWI analysis tools that are used to identify the positions of 22 tracts and their diffusion profiles. The combination of these three components defines a system that transforms raw data into reproducible tract profiles for publication.

Graphical abstract: The RTP methods comprise two main parts: (1) server-side software tools for storing data and metadata and managing containerized computations, and (2) client-side software tools that enable the researcher to read data and metadata and manage server-side computations. The server-side computational tools are embedded in containers that are linked to a JSON file with a complete specification of the computational parameters. The data and computational infrastructure on the server is fully reproducible.
Taylor, J. E.; Kriegeskorte, N.
Deep neural network models (DNNs) are essential to modern AI and provide powerful models of information processing in biological neural networks. Researchers in both neuroscience and engineering are pursuing a better understanding of the internal representations and operations that undergird the successes and failures of DNNs. Neuroscientists additionally evaluate DNNs as models of brain computation by comparing their internal representations to those found in brains. It is therefore essential to have a method to easily and exhaustively extract and characterize the results of the internal operations of any DNN. Many models are implemented in PyTorch, the leading framework for building DNN models. Here we introduce TorchLens, a new open-source Python package for extracting and characterizing hidden-layer activations in PyTorch models. Uniquely among existing approaches to this problem, TorchLens has the following features: (1) it exhaustively extracts the results of all intermediate operations, not just those associated with PyTorch module objects, yielding a full record of every step in the model's computational graph, (2) it provides an intuitive visualization of the model's complete computational graph along with metadata about each computational step in a model's forward pass for further analysis, (3) it contains a built-in validation procedure to algorithmically verify the accuracy of all saved hidden-layer activations, and (4) the approach it uses can be automatically applied to any PyTorch model with no modifications, including models with conditional (if-then) logic in their forward pass, recurrent models, branching models where layer outputs are fed into multiple subsequent layers in parallel, and models with internally generated tensors (e.g., injections of noise).
Furthermore, using TorchLens requires minimal additional code, making it easy to incorporate into existing pipelines for model development and analysis, and useful as a pedagogical aid when teaching deep learning concepts. We hope this contribution will help researchers in AI and neuroscience understand the internal representations of DNNs.
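The key notion above, recording every intermediate operation rather than only module boundaries, can be illustrated without PyTorch by a tiny tracing wrapper: each arithmetic operation on a wrapped value appends an entry to a log, yielding a complete record of the "forward pass". This is purely a conceptual sketch; it is not TorchLens's mechanism or API, and all names here are invented for illustration.

```python
def _val(x):
    """Unwrap a Traced operand, or pass a plain number through."""
    return x.value if isinstance(x, Traced) else x


class Traced:
    """Minimal tracing wrapper: every arithmetic op on a Traced value is
    appended to a shared log, analogous to recording each step of a
    forward pass. Illustrative only; not how TorchLens works internally."""
    log = []  # shared record of (op, lhs, rhs, result_name) tuples

    def __init__(self, value, name):
        self.value, self.name = value, name

    def _record(self, op, other, result):
        Traced.log.append((op, self.name, getattr(other, "name", other), result.name))
        return result

    def __add__(self, other):
        out = Traced(self.value + _val(other), f"add_{len(Traced.log)}")
        return self._record("add", other, out)

    def __mul__(self, other):
        out = Traced(self.value * _val(other), f"mul_{len(Traced.log)}")
        return self._record("mul", other, out)


# A tiny "forward pass": y = (x * 2) + (x * 3); the log captures all three steps
x = Traced(5.0, "x")
y = (x * 2) + (x * 3)
```

Replaying `Traced.log` reconstructs the full computational graph, which is the information TorchLens extracts and visualizes for arbitrary PyTorch models.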
Liu, Z.-Q.; Bazinet, V.; Hansen, J. Y.; Milisav, F.; Luppi, A. I.; Ceballos, E. G.; Farahani, A.; Suarez, L. E.; Shafiei, G.; Markello, R. D.; Misic, B.
Brain imaging is an increasingly inter-disciplinary field, encompassing multiple data types and multiple analytic traditions. Projects typically involve many moving parts, such as building customized preprocessing pipelines, transforming between data formats, preparing datasets for analysis, and ultimately displaying results. The field is conventionally built on highly specialized software packages that solve these individual challenges well, but are not necessarily designed to be interoperable. Trainees new to the field are therefore often left to come up with isolated heuristics and workarounds to complete a project. Here we present a way to navigate the increasingly complex informatics ecosystem of brain imaging. netneurotools is our lab's internal Python toolkit that has been continuously developed and maintained by the lab's trainees. The philosophy of the toolkit is that it should be the Swiss army knife of the lab: functions and routines that we often use but that are not part of any established pipeline or package. Since its inception, the toolkit has been open and welcomes contributions from neuroscientists across the globe. netneurotools presents a necessary counterweight to out-of-the-box software packages and highlights the importance of smaller, ad hoc functions for implementing projects. By opening a window into the inner workings of a lab, netneurotools also presents an opportunity to begin a new type of discourse among groups and establish tangible links within the community.
Bexter, A.; Kampa, B. M.
In the last 15 years, virtual realities have revolutionized behavior experiments in particular for rodents. In combination with treadmills, running wheels, or air-floating balls, the implementation of a virtual reality (VR) provides not only the opportunity to simultaneously explore behavior and neuronal activity in head-fixed animals under nearly natural conditions, but also allows full control over the visual sensory input presented to the animal. Furthermore, VRs can be combined with other sensory modalities such as auditory, tactile or olfactory stimuli. Despite the power of using VRs in animal experiments, available software packages are very limited, expensive and lack the required flexibility to design appropriate behavior and neurophysiology experiments. For this reason, we have developed the versatile, adaptable and easy-to-use VR environment MazeMaster, an open-source, Python-based software package for controlling virtual reality setups and behavior experiments. The software package includes a graphical user interface (GUI) and can be integrated into standard electrophysiology and imaging setups even by non-programmers. Ready-made behavioral experiments such as multisensory discrimination in T-mazes are already implemented including full control for reward supply and bias correction. For more individual setup designs, the modularity of MazeMaster allows more programming-affine users to extend the software with potentially missing features. With MazeMaster, we offer a free and easy-to-use VR controller that will facilitate the implementation of VR setups in scientific laboratories. In addition, MazeMaster allows the design and control of common head-fixed rodent behavior paradigms with extensive acquisition of meta-data required for reproducible VR experiments. The MazeMaster VR package, therefore, offers a collaboration tool for reproducible research within and across neuroscience laboratories according to the FAIR principles.
Klug, M.; Jeung, S.; Wunderlich, A.; Gehrke, L.; Protzak, J.; Djebbara, Z.; Argubi-Wollesen, A.; Wollesen, B.; Gramann, K.
Advancements in hardware technology and analysis methods allow more and more mobility in electroencephalography (EEG) experiments. Mobile Brain/Body Imaging (MoBI) studies may record various types of data such as motion or eye tracking in addition to neural activity. Although there are options available to analyze EEG data in a standardized way, they do not fully cover complex multimodal data from mobile experiments. We thus propose the BeMoBIL Pipeline, an easy-to-use pipeline in MATLAB that supports the time-synchronized handling of multimodal data. It is based on EEGLAB and fieldtrip and consists of automated functions for EEG preprocessing and subsequent source separation. It also provides functions for motion data processing and extraction of event markers from different data modalities, including the extraction of eye-movement and gait-related events from EEG using independent component analysis. The pipeline introduces a new robust method for region-of-interest-based group-level clustering of independent EEG components. Finally, the BeMoBIL Pipeline provides analytical visualizations at various processing steps, keeping the analysis transparent and allowing for quality checks of the resulting outcomes. All parameters and steps are documented within the data structure and can be fully replicated using the same scripts. This pipeline makes the processing and analysis of (mobile) EEG and body data more reliable and independent of the prior experience of the individual researchers, thus facilitating the use of EEG in general and MoBI in particular. It is an open-source project available for download at https://github.com/BeMoBIL/bemobil-pipeline which allows for community-driven adaptations in the future.
Szinte, M.; Bach, D. R.; Draschkow, D.; Esteban, O.; Gagl, B.; Gau, R.; Gregorova, K.; Halchenko, Y. O.; Huberty, S.; Kling, S. M.; Kulkarni, S.; Markiewicz, C. J.; Mikkelsen, M.; Oostenveld, R.; Pfarr, J.-K.
The Brain Imaging Data Structure (BIDS) is a widely adopted, community-driven standard to organize neuroimaging data and metadata. Although numerous extensions have been developed to incrementally extend coverage to new modalities and data types, an unambiguous, granular specification for eye-tracking recordings is lacking. Here, we present how BIDS will structure data and metadata produced by eye-tracking devices, including gaze position and pupil data. In addition to prescribing the organization of the unprocessed (raw) recordings and associated metadata as produced by the device, BEP20 also resolves gaps in current BIDS specifications beyond the scope of eye tracking. In particular, it adds a mechanism for including asynchronous model parameters and messages (such as contextual information and statuses) and events (such as triggers) generated by the device. BEP20 includes examples that illustrate its applicability in various experimental settings. This BIDS extension provides a robust standard that supports the development of self-adaptive, open, and automated eye-tracking data structures, thereby bolstering transparency and reliability of results in this field.
Coath, W.; Modat, M.; Cardoso, M. J.; Markiewicz, P.; Lane, C. A.; Parker, T. D.; Keshavan, A.; Buchanan, S. M.; Keuss, S. E.; Harris, M. J.; Burgos, N.; Dickson, J.; Barnes, A.; Thomas, D. L.; Beasley, D.; Malone, I. B.; Wong, A.; Erlandsson, K.; Thomas, B. A.; Schöll, M.; Ourselin, S.; Richards, M.; Fox, N. C.; Schott, J. M.; Cash, D. M.
Purpose: The Centiloid scale provides a systematic means of harmonising amyloid-β PET measures across different acquisition and processing methodologies. This work explores the Centiloid transformation of [18F]florbetapir PET data acquired on a combined PET/MR scanner and processed with methods that differ from the standard Centiloid pipeline. Methods: The Standard PiB and Florbetapir Calibration datasets were processed using a standardised uptake value ratio (SUVR) pipeline with MRI parcellations from the Geodesic Information Flow (GIF) algorithm in native PET space. We generated SUVRs using whole cerebellum (GIF_WCSUVR) and eroded white matter (GIF_WMSUVR) reference regions, with and without partial volume correction (PVC). Linear regression was used to calibrate these processing pipelines to the standard Centiloid approach. We then applied the resulting transformation to 432 florbetapir scans from the Insight 46 study of mostly cognitively normal individuals aged ~70 years, and defined Centiloid cutpoints for amyloid-β positivity using Gaussian mixture modelling. Results: GIF-based SUVR processing pipelines were suitable for conversion according to Centiloid criteria. For GIF_WCSUVR, cutpoints translated to 14.2 Centiloids, or 11.8 with PVC. There was a differential relationship between florbetapir uptake in WM and WC regions in the Florbetapir Calibration and Insight 46 datasets, causing implausibly low Centiloid values for GIF_WMSUVR. Linear adjustment to account for this difference resulted in Centiloid cutpoints of 18.1 for GIF_WMSUVR (17.0 with PVC). Conclusion: Our results show that florbetapir SUVRs acquired on PET/MR scanners can be reliably converted to Centiloids. Acquisition or biological factors can have large effects on Centiloid values from different datasets; we propose a correction to account for these effects.
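The Centiloid transform underlying this kind of calibration is a simple linear rescaling (Klunk et al., 2015) that anchors the young-control mean SUVR at 0 CL and the typical-AD mean at 100 CL. A minimal sketch, with anchor values that are illustrative rather than those of this study:

```python
def to_centiloid(suvr, suvr_yc_mean, suvr_ad_mean):
    """Standard Centiloid linear transform: young-control mean maps to 0 CL,
    typical-AD mean maps to 100 CL. Anchor values must come from calibrating
    the local pipeline against the standard Centiloid datasets."""
    return 100.0 * (suvr - suvr_yc_mean) / (suvr_ad_mean - suvr_yc_mean)


# Hypothetical anchors for a hypothetical pipeline, for illustration only
yc_mean, ad_mean = 1.05, 2.05
cl_cutpoint = to_centiloid(1.30, yc_mean, ad_mean)
```

In practice the anchors are obtained by regressing the local pipeline's SUVRs against the standard pipeline on the shared calibration scans, which is the calibration step the abstract describes.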
Stuttgen, M. C.
The analysis of neurophysiological data obtained from extracellular recordings is usually performed using a number of standard techniques. These include a) the extraction of action potentials from voltage traces and their subsequent classification, i.e., spike sorting, b) the visualization of activity, e.g., by constructing raster plots, peri-stimulus time histograms (PSTHs), and spike density functions, and c) the quantification of neuronal responses according to experimental variables such as stimulation or movement. Here I present a Matlab toolbox containing functions for the visualization and analysis of neuronal spike data. The toolbox consists entirely of one-liners that operate on vector or matrix inputs, i.e., spike and event timestamps or waveform samples. The toolbox functions provide both basic (constructing PSTHs, computing waveform characteristics etc.) and more advanced functionality, such as dimensionality reduction of multi-neuron recordings. While offering a high degree of versatility, the toolbox should also be accessible to newcomers to neurophysiology, such as (under)graduate students or PhD students. The functions are streamlined, easy to use, and each function is extensively introduced with several examples using real or simulated data. In addition, many functions provide fully formatted plots on request, even with minimal Matlab knowledge. The toolbox is available from https://github.com/maikstue/mlib-spike-data.
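The PSTH mentioned above is conceptually simple: align spike times to each event, bin them, and normalize by trial count and bin width to get a rate. A language-neutral sketch in Python (the toolbox itself provides MATLAB one-liners; this is not its code):

```python
def psth(spike_times, event_times, window=(-0.5, 1.0), bin_width=0.1):
    """Peri-stimulus time histogram: count spikes in bins aligned to each
    event, then convert counts to a trial-averaged rate in spikes/s."""
    start, stop = window
    n_bins = int(round((stop - start) / bin_width))
    counts = [0] * n_bins
    for ev in event_times:
        for t in spike_times:
            rel = t - ev  # spike time relative to this event
            if start <= rel < stop:
                counts[int((rel - start) / bin_width)] += 1
    n_events = len(event_times)
    return [c / (n_events * bin_width) for c in counts]


# Two trials, one spike 50 ms after each event: 10 spikes/s in the first bin
rates = psth([1.05, 3.05], [1.0, 3.0], window=(0.0, 0.2), bin_width=0.1)
```

A spike density function is the same idea with the histogram replaced by a smoothing kernel over the aligned spike times.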
Artesani, A.
Background: The Patlak graphical analysis is a widely used method for quantifying irreversible tracer uptake in dynamic Positron Emission Tomography (PET) studies, providing key kinetic parameters such as the net influx rate (Ki) and the total distribution volume (Vd). Currently, this analysis is primarily performed using proprietary software or vendor-specific workstations, which can limit its accessibility, flexibility, and reproducibility. This work presents pyPatlak, an open-source, Python-based, platform-independent tool that performs Patlak modelling directly from dynamic PET DICOM data, offering an accessible alternative to proprietary software. Methods: The pyPatlak script processes dynamic PET data in DICOM format to generate 3D parametric images of Ki and Vd. Key features include the normalization of a population-based input function (PBIF) with a patient-specific arterial input function (AIF), and optional correction for the partial volume effect (PVE). The script performs a voxel-by-voxel linear regression to derive the kinetic parameters. To validate the script, we compared the Ki and Vd values generated by pyPatlak with those obtained from a commercial workstation's direct Patlak analysis. This validation was performed on 21 patients by segmenting seven organs of interest and comparing the values of the kinetic parameters. Results: pyPatlak showed good agreement with the reference direct Patlak reconstruction. Correlation analysis demonstrated strong linear relationships (Ki: R = 0.91, Vd: R = 0.93), and Lin's concordance coefficients confirmed high agreement (Ki: 0.89, Vd: 0.91). Bland-Altman analysis indicated that observed differences were minimal and clinically negligible. Mean biases were approximately -0.03 ml/min/100ml for Ki and +2.2 units for Vd. Equivalence testing further confirmed that all differences fell within predefined clinically acceptable limits, despite being statistically significant in Wilcoxon signed-rank tests.
Conclusion: pyPatlak offers a flexible, reproducible, and transparent alternative to proprietary software for Patlak analysis. Its open-source nature and compatibility with standard DICOM data make it a valuable tool for researchers, promoting greater accessibility and standardization of kinetic modelling in PET imaging.
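The per-voxel linear regression described above fits the Patlak relation y(t) = Ki·x(t) + Vd, where y = C_tissue/C_plasma and x = ∫C_plasma dτ / C_plasma. A schematic single-curve version (not pyPatlak's code; pyPatlak applies this voxel-by-voxel to DICOM volumes):

```python
def patlak_fit(tissue, plasma, times):
    """Patlak graphical analysis for one tissue curve: regress
    y = C_tissue/C_plasma against x = integral(C_plasma)/C_plasma;
    the slope is Ki and the intercept Vd."""
    # Cumulative integral of the plasma input function (trapezoid rule)
    integral, cum = 0.0, []
    for i, t in enumerate(times):
        if i > 0:
            integral += 0.5 * (plasma[i] + plasma[i - 1]) * (t - times[i - 1])
        cum.append(integral)
    xs = [c / p for c, p in zip(cum, plasma)]
    ys = [ct / p for ct, p in zip(tissue, plasma)]
    # Ordinary least squares for slope (Ki) and intercept (Vd)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    ki = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    vd = my - ki * mx
    return ki, vd


# Synthetic curve with constant plasma input: recovers Ki = 0.05, Vd = 0.3
ki, vd = patlak_fit(tissue=[0.3, 0.35, 0.4, 0.45, 0.5],
                    plasma=[1.0] * 5,
                    times=[0.0, 1.0, 2.0, 3.0, 4.0])
```

In real data only the late, linear portion of the Patlak plot is fitted; the time-window selection is omitted here for brevity.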
Lankarany, M.; Azzalini, L. J.
Neuromorphic chips are well-suited for the exploration of neuronal dynamics in (near) real-time. In order to port existing research onto these chips, relevant models of neuronal and synaptic dynamics first need to be supported by their respective development environments and validated against existing simulator backends. At the time of writing, support for short-term synaptic plasticity on neuromorphic hardware is scarce. This technical paper proposes an implementation of dynamic synapses for the SpiNNaker development environment based on the popular synaptic plasticity model by Tsodyks and Markram (TM). This extension is undertaken in the context of existing research on neuromodulation and the study of deep brain stimulation (DBS) effects on single-neuron responses. The implementation of the TM synapse is first detailed and then simulated for various response types. Its role in studies of DBS effects on postsynaptic responses is also reviewed. Finally, given the real-time capabilities offered by the hardware, we provide some insight to lay the groundwork for future explorations of closed-loop DBS on neuromorphic chips.
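The TM model tracks two synaptic variables per synapse: available resources x (recovering to 1 with time constant tau_rec) and utilization u (decaying toward baseline U with tau_facil); each presynaptic spike releases u·x of the resources. A common event-driven textbook discretization, with illustrative parameter values (this is a sketch of the model, not the SpiNNaker implementation):

```python
import math

def tm_synapse(spike_times, U=0.5, tau_rec=0.8, tau_facil=0.0, weight=1.0):
    """Event-driven update of the Tsodyks-Markram short-term plasticity
    model. Returns the synaptic efficacy (weight * u * x) at each
    presynaptic spike; tau_facil = 0 gives a purely depressing synapse."""
    x, u = 1.0, U
    last_t, amplitudes = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)    # resource recovery
            if tau_facil > 0.0:
                u = U + (u - U) * math.exp(-dt / tau_facil)  # facilitation decay
        if tau_facil > 0.0:
            u = u + U * (1.0 - u)                            # facilitation jump
        amplitudes.append(weight * u * x)
        x = x - u * x                                        # resource depletion
        last_t = t
    return amplitudes


# Depressing synapse driven at 10 Hz: successive efficacies shrink
amps = tm_synapse([0.1 * i for i in range(5)], U=0.5, tau_rec=0.8)
```

On neuromorphic hardware the same update is applied per spike event, which is what makes the model a natural fit for event-driven architectures like SpiNNaker.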
Mininni, C. J.; Zanutto, B. S.
A main goal in neuroscience is to understand the computations carried out by neural populations that give animals their cognitive skills. Neural network models make it possible to formulate explicit hypotheses regarding the algorithms instantiated in the dynamics of a neural population, its firing statistics, and the underlying connectivity. Neural networks can be defined by a small set of parameters, carefully chosen to procure specific capabilities, or by a large set of free parameters, fitted with optimization algorithms that minimize a given loss function. In this work we propose an alternative: a method to make a detailed adjustment of the network dynamics and firing statistics to better answer questions that link dynamics, structure and function. Our algorithm, termed generalized Firing-to-Parameter (gFTP), provides a way to construct binary recurrent neural networks whose dynamics strictly follow a user pre-specified transition graph that details the transitions between population firing states triggered by stimulus presentations. Our main contribution is a procedure that detects when a transition graph is not realizable in terms of a neural network, and makes the necessary modifications in order to obtain a new transition graph that is realizable and preserves all the information encoded in the transitions of the original graph. With a realizable transition graph, gFTP assigns values to the network firing states associated with each node in the graph, and finds the synaptic weight matrices by solving a set of linear separation problems. We test gFTP performance by constructing networks with random dynamics, continuous attractor-like dynamics that encode position in 2-dimensional space, and discrete attractor dynamics. We then show how gFTP can be employed as a tool to explore the broad dependencies between structure and function, and the specific dependencies subserving the algorithms instantiated in the network activity.
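The final step described above, finding weights by solving one linear separation problem per neuron, can be illustrated with a perceptron on a tiny realizable transition graph. This is a stand-in for that step only (gFTP's actual procedure, including the realizability repair, is the paper's contribution; all names here are invented):

```python
def train_unit(samples, epochs=100):
    """Fit one threshold unit (weights + bias) so that step(w.s + b)
    reproduces the desired next-state bit for every population state.
    Classic perceptron rule; converges when the samples are separable."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for state, target in samples:
            out = 1 if sum(wi * si for wi, si in zip(w, state)) + b > 0 else 0
            err = target - out
            w = [wi + err * si for wi, si in zip(w, state)]
            b += err
    return w, b


# Desired transition graph on 2 binary neurons: (0,1)->(1,0)->(1,1)->(0,1)
transitions = {(0, 1): (1, 0), (1, 0): (1, 1), (1, 1): (0, 1)}

# One linear separation problem per neuron: its next-state bit vs. the state
units = [train_unit([(s, nxt[i]) for s, nxt in transitions.items()])
         for i in range(2)]

def step(state):
    """Advance the trained binary network by one time step."""
    return tuple(1 if sum(wi * si for wi, si in zip(w, state)) + b > 0 else 0
                 for w, b in units)
```

When a target graph is not realizable, some neuron's separation problem has no solution; detecting and repairing that case is precisely what gFTP automates.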
Ferreiro, E.; Rodriguez-Iglesias, N.; Cardoso, J.; Valero, J.
Volume estimations are crucial for many neuroscience studies, allowing the evaluation of changes in the size of brain areas that may have relevant functional consequences. Classical histological methods and modern human brain imaging techniques rely on obtaining physical or digital sections, with a known thickness, of the organ to be analyzed. This "slicing" strategy is associated with an unavoidable loss of information about the three-dimensional organization of the analyzed structures, especially affecting the precision of volumetric measurements. To overcome this problem, several methods have been developed. One of the most commonly used approaches for volume estimation is the classical Cavalieri's method. Within this book chapter, we provide first an overview of Cavalieri's method and propose a new one, named the Truncated Cone Shape (TCS) method, for the estimation of volumes from tissue sections. Second, we compare the accuracy of both methods using computer-generated objects of different shapes and sizes. We conclude that, more frequently, the TCS method provides a better estimate of real volumes than Cavalieri's method. And third, we describe a protocol to estimate volumes using a self-developed and freely available tool for ImageJ: VolumestJ (https://github.com/Jorvalgl/VolumestJ). This new tool helps to implement both Cavalieri's and TCS methods using digital images of tissue sections. We consider that VolumestJ will facilitate the labor of researchers interested in volume estimations.
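Cavalieri's estimator is simply section thickness times the summed section areas. If, as its name suggests, the TCS method instead treats each slab between consecutive sections as a conical frustum (an assumption here, not a statement of the chapter's exact formula), the two can be compared on a shape where the frustum formula is exact:

```python
import math

def cavalieri_volume(areas, thickness):
    """Cavalieri's estimator: thickness times the sum of section areas."""
    return thickness * sum(areas)

def tcs_volume(areas, thickness):
    """Frustum-based estimate between consecutive sections (our reading of
    a truncated-cone approach; illustrative, not the chapter's code):
    V = sum of T/3 * (A_i + A_{i+1} + sqrt(A_i * A_{i+1}))."""
    return sum(thickness / 3.0 * (a + b + math.sqrt(a * b))
               for a, b in zip(areas, areas[1:]))


# Sections through a cone with radius r(z) = z, sampled at z = 0, 1, 2:
# the frustum estimate is exact (8*pi/3), Cavalieri overestimates (5*pi)
areas = [math.pi * z ** 2 for z in (0, 1, 2)]
v_cav = cavalieri_volume(areas, 1.0)
v_tcs = tcs_volume(areas, 1.0)
```

This matches the chapter's conclusion qualitatively: interpolating the shape between sections tends to beat treating each section as a constant-area slab.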
Thomas Homescu, A.; Murray, T.
We describe an interactive visualizer (implemented in the R Shiny framework) to facilitate analysis and a better understanding of neurotransmitter data collected in the context of epileptic seizures. Given the very high granularity of the collected data (at the millisecond level), it is challenging to use static visuals and/or tables for deeper insight into the data and its features. Such challenges are greatly alleviated by an interactive visualizer (dashboard) with the ability to zoom out (for "big picture" analysis) and to zoom in (for a much more focused and targeted analysis). The visualizer is available at https://kittyviz.shinyapps.io/GluGabaViz
Larsen, L. B.; Stagsted, R.; Strohmer, B.; Christensen, A. L.
Neuromorphic computing currently relies heavily on complicated hardware design to implement asynchronous, parallel and very large-scale brain simulations. This dependency slows down the migration of biological insights into technology. It typically takes several years from idea to finished hardware and once developed the hardware is not broadly available to the community. In this contribution, we present the CloudBrain research platform, an alternative based on modern cloud computing and event stream processing technology. Typical neuromorphic design goals, such as small form factor and low power consumption, are traded for 1) no constraints on the model elements, 2) access to all events and parameters during and after the simulation, 3) online reconfiguration of the network, and 4) real-time simulation. We explain principles for how neuron, synapse and network models can be implemented and we demonstrate that our implementation can be used to control a physical robot in real-time. CloudBrain is open source and can run on commodity hardware or in the cloud, thus providing the community a new platform with a different set of features supporting research into, for example, neuron models, structural plasticity and three-factor learning.
Layer, M.; Senk, J.; Essink, S.; van Meegen, A.; Bos, H.; Helias, M.
Mean-field theory of spiking neuronal networks has led to numerous advances in our analytical and intuitive understanding of the dynamics of neuronal network models during the past decades. However, the elaborate nature of many of the developed methods, as well as the difficulty of implementing them, may prevent the wider neuroscientific community from taking full advantage of these tools. In order to make them more accessible, we implemented an extensible, easy-to-use open-source Python toolbox that collects a variety of mean-field methods for the widely used leaky integrate-and-fire neuron model. The Neuronal Network Mean-field Toolbox (NNMT) in its current state allows for estimating properties of large neuronal networks, such as firing rates, power spectra, and dynamical stability in mean-field and linear response approximation, without running simulations on high-performance systems. In this article we describe how the toolbox is implemented, show how it is used to calculate neuronal network properties, and discuss different use cases, such as extraction of network mechanisms, parameter space exploration, or hybrid modeling approaches. Although the initial version of the toolbox focuses on methods that are close to our own past and present research, its structure is designed to be open and extensible. It aims to provide a platform for collecting analytical methods for neuronal network model analysis, and we discuss how interested scientists can share their own methods via this platform.
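The firing-rate estimates mentioned above come from solving a self-consistency equation: the population rate must equal the transfer function evaluated at the input that rate generates. A schematic fixed-point iteration with a sigmoidal transfer function illustrates the idea (NNMT uses the LIF-specific transfer function, involving error-function integrals, instead; all parameter values here are illustrative):

```python
import math

def mean_field_rate(w, ext, r_max=50.0, gain=0.1, theta=10.0, n_iter=200):
    """Solve the self-consistency condition r = phi(w*r + ext) by
    fixed-point iteration, with a sigmoidal transfer function phi.
    Schematic stand-in for a mean-field rate calculation."""
    def phi(mu):
        return r_max / (1.0 + math.exp(-gain * (mu - theta)))
    r = 0.0
    for _ in range(n_iter):
        r = phi(w * r + ext)  # feed the rate back into the input
    return r


# Inhibition-dominated population (w < 0): iteration contracts to a
# stable self-consistent rate
r_star = mean_field_rate(w=-0.5, ext=15.0)
```

With negative feedback the iteration is a contraction here (slope of the composed map stays below 1), so it converges without needing a root solver; stiffer cases use Newton-type methods on the same equation.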
Jung, E. W.; Kashyap, A.; Hsu, B.; Moreland, M.; Chantaduly, C.; Chang, P.
PURPOSE: Alzheimer's Disease (AD) is a neurodegenerative disease that progressively deteriorates memory and cognitive abilities. PET 18F-AV45 (florbetapir) is a common imaging modality used to characterize the distribution of beta-amyloid deposits in the brain; however, interpretation may be subjective, and the misdiagnosis rate of AD ranges from 12-23%. Automated algorithms for PET 18F-AV45 interpretation, including those derived from deep learning, may facilitate more objective and accurate AD diagnosis. MATERIALS & METHODS: A total of 1232 PET AV45 scans (207 AD; 1025 normal) were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). A semi-supervised deep learning framework was developed to differentiate AD and normal patients. The framework consists of an autoencoder (AE), a contrastive learning loss, and a categorical classification head. A contrastive learning paradigm is used to improve the discriminative properties of latent feature vectors in multidimensional space. RESULTS: Upon five-fold cross-validation, the best-performing semi-supervised contrastive model achieved validation accuracy of 82% to 86%. Secondary analysis included visualization of intermediate activations, classification report verification, and principal component analysis (PCA) of latent feature vectors. The training process yielded optimal converging losses for all three loss frameworks. CONCLUSION: A deep learning model can accurately diagnose AD using PET 18F-AV45 scans. Such models require large amounts of labeled data during training. The use of a semi-supervised contrastive learning objective and AE regularizer helps to improve model performance, especially when dataset sizes are constrained. Latent representations extracted by the model cluster strongly with the addition of a contrastive learning mechanism.
Summary Statement: A semi-supervised contrastive deep learning system optimizes latent feature vector representations and yields strong classification performance for larger data distributions within the Alzheimer's Disease diagnostics domain. Key Points:
- A common diagnostic procedure used by trained radiologists in the clinical setting is the visual analysis of PET 18F-AV45 neuroimaging scans to diagnose the different stages of Alzheimer's Disease in a patient.
- Contrastive learning is a strategy that allows for the optimization of latent feature representations in multidimensional space through the use of a loss function that maximizes the distance between feature vectors of different classes and minimizes the distance between feature vectors of the same class.
- A semi-supervised contrastive learning approach can lead to improved performance and generalization of deep learning models optimized using small training datasets, as encountered in Alzheimer's Disease and other neurodegenerative conditions.